Understanding the abundance and distribution of fish in tidal energy streams is important for assessing the risks presented by introducing tidal energy devices into the habitat. However, tidal current flows suitable for tidal energy are often highly turbulent, which complicates the interpretation of echosounder data. The portions of the water column contaminated by returns from entrained air must be excluded from the data used for biological analyses. Applying a single conventional algorithm to identify the depth of entrained air is insufficient for a boundary that is discontinuous, depth-dynamic, porous, and varies with tidal flow speed. Using a case study at a tidal energy demonstration site in the Bay of Fundy, we describe the development and application of deep machine learning models with a U-Net based architecture. Our model, Echofilter, was highly responsive to the dynamic range of turbulence conditions and sensitive to fine-scale nuances in the boundary position, producing an entrained-air boundary line with an average error of 0.33 m on mobile downfacing data and 0.5-1.0 m on stationary upfacing data, less than half that of the existing algorithmic solution. The model's overall annotations were in high agreement with the human segmentation, with an intersection-over-union score of 99% for mobile downfacing recordings and 92-95% for stationary upfacing recordings. The time required to manually edit the model's line placement was reduced by 50% compared to the time required to edit the placements produced by currently available algorithms. Because of the improvement in the initial automated placement, implementing the model allows for increased standardization and repeatability of line placement.
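The abstract states only that Echofilter uses a U-Net based architecture; as an illustration of that model family, here is a minimal sketch of a small U-Net that assigns a per-pixel logit (entrained air vs. clean water column) to an echogram patch. The class name, channel widths, two-level depth, and single-channel input are assumptions made for the example, not Echofilter's actual configuration.

```python
# A minimal U-Net-style encoder-decoder sketch for binary segmentation of an
# echogram (entrained air vs. water). Sizes are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.head = nn.Conv2d(16, 1, kernel_size=1)  # per-pixel air/water logit

    def forward(self, x):
        e1 = self.enc1(x)                    # full resolution
        e2 = self.enc2(self.pool(e1))        # 1/2 resolution
        b = self.bottleneck(self.pool(e2))   # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)

# Usage: a (time x depth) patch whose sides are divisible by 4 (two poolings).
model = TinyUNet()
logits = model(torch.randn(1, 1, 64, 64))  # -> (1, 1, 64, 64) segmentation logits
```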
We seek to improve the pooling operation in neural networks by applying a more theoretically justified operator. We demonstrate that LogSumExp provides a natural OR operator for logits. When one corrects for the number of elements inside the pooling operator, this becomes $\text{LogAvgExp} := \log(\text{mean}(\exp(x)))$. By introducing a single temperature parameter, LogAvgExp smoothly transitions from the max of its operands to their mean (found at the limiting cases $t \to 0^+$ and $t \to +\infty$). We experimentally tested LogAvgExp, both with and without a learnable temperature parameter, in a variety of deep neural network architectures for computer vision.
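A minimal sketch of the pooling operator as defined above, with the temperature applied as a rescaling of the logits; the function name and signature are illustrative, not the paper's reference implementation.

```python
# LogAvgExp pooling with temperature t: t * log(mean(exp(x / t))), computed
# stably via torch.logsumexp. As t -> 0+ the output approaches max(x); as
# t -> +inf it approaches mean(x), matching the limits stated in the abstract.
import math
import torch

def logavgexp(x, dim=-1, t=1.0):
    n = x.shape[dim]
    # log(mean(exp(z))) == logsumexp(z) - log(n)
    return t * (torch.logsumexp(x / t, dim=dim) - math.log(n))

x = torch.tensor([1.0, 2.0, 3.0])
print(logavgexp(x, t=0.01))   # ~3.0: near the max
print(logavgexp(x, t=1.0))    # ~2.31: between mean and max
print(logavgexp(x, t=100.0))  # ~2.0: near the mean
```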
The choice of activation functions and their motivation is a long-standing issue within the neural network community. Neuronal representations within artificial neural networks are commonly understood as logits, representing the log-odds score of the presence of features within the stimulus. We derive logit-space operators equivalent to the probabilistic Boolean logic-gates AND, OR, and XNOR for independent probabilities. Such theories are important for formalizing more complex dendritic operations in real neurons, and these operations can be used as activation functions within a neural network, introducing probabilistic Boolean logic as the core operation of the network. Since these functions involve taking multiple exponents and logarithms, they are computationally expensive and not well suited to direct use within neural networks. Consequently, we construct efficient approximations named $\text{AND}_\text{AIL}$ (the AND operator Approximate for Independent Logits), $\text{OR}_\text{AIL}$, and $\text{XNOR}_\text{AIL}$, which utilize only comparison and addition operations, have well-behaved gradients, and can be deployed as activation functions in neural networks. Like MaxOut, $\text{AND}_\text{AIL}$ and $\text{OR}_\text{AIL}$ are generalizations of ReLU to two dimensions. While our primary aim is to formalize dendritic computations within a logit-space probabilistic-Boolean framework, we deploy these new activation functions, both in isolation and in conjunction, to demonstrate their effectiveness on a variety of tasks including image classification, transfer learning, abstract reasoning, and compositional zero-shot learning.
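For concreteness, here is a minimal sketch of the exact logit-space operators described above, taking $p = \sigma(x)$ and $q = \sigma(y)$ for independent events. These are the expensive exact forms the abstract refers to; the cheaper $\text{AND}_\text{AIL}$ / $\text{OR}_\text{AIL}$ / $\text{XNOR}_\text{AIL}$ approximations, built from comparisons and additions, are not reproduced here since their closed forms are not given in the abstract. The helper names are illustrative.

```python
# Exact logit-space Boolean operators for independent probabilities.
# These require several exp/log evaluations per call, which is what motivates
# the paper's comparison-and-addition approximations.
import torch

def logit(p, eps=1e-7):
    p = p.clamp(eps, 1 - eps)  # avoid log(0) at saturated probabilities
    return torch.log(p) - torch.log1p(-p)

def and_exact(x, y):
    # P(A and B) = sigmoid(x) * sigmoid(y) for independent events
    return logit(torch.sigmoid(x) * torch.sigmoid(y))

def or_exact(x, y):
    # P(A or B) = 1 - (1 - p)(1 - q); equivalently OR(x, y) = -AND(-x, -y)
    return -and_exact(-x, -y)

def xnor_exact(x, y):
    # P(A xnor B) = p*q + (1 - p)(1 - q): both features present or both absent
    p, q = torch.sigmoid(x), torch.sigmoid(y)
    return logit(p * q + (1 - p) * (1 - q))

x, y = torch.tensor(2.0), torch.tensor(-1.0)
print(and_exact(x, y), or_exact(x, y), xnor_exact(x, y))
```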